Published October 23, 2023
The National Science Foundation project known as FABRIC, an adaptive programmable research infrastructure for computer science and science applications, has completed installation of a unique network infrastructure connection called the TeraCore. The TeraCore is a ring spanning the continental U.S., and it has reached a project milestone by demonstrating data transmission speeds of more than one trillion bits per second, or 1.2 Terabits per second (Tbps).
Previously, FABRIC gained attention for its cross-continental infrastructure, connecting an open network of institutions, including minority-serving institutions, by deploying a federated data fabric testbed configurable for individual and shared scientific use. But transmitting data at these speeds, as much as 12 times faster than the testbed's previous speeds, hits a new mark.
“I’m very pleased to learn that the 1.2 Tbps TeraCore in FABRIC has been installed and is now operational,” said NSF Program Director for FABRIC Deep Medhi. “This will provide researchers with unprecedented capability in the FABRIC platform to push data-intensive research that avails the benefit of this capability.”
The novel network infrastructure is geared toward prototyping ideas for the future internet at scale. FABRIC currently has more than 800 users on the system performing cutting-edge experiments and at-scale research in the areas of networking, cybersecurity, distributed computing, storage, virtual reality, 5G, machine learning and science applications.
According to Frank Würthwein, director of the San Diego Supercomputer Center (SDSC) at UC San Diego, users now have the capability to test how their experiments run at much higher data rates, including developing endpoints that can source and sink data, and protocols that can transfer it at up to 1.2 Tbps over continental distances. Additionally, while federated facilities were previously connected to FABRIC at 100 Gigabits per second (Gbps), now that TeraCore is operational the project team is working to connect several federated facilities, including SDSC, at 400 Gbps.
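To put these figures in perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes a cross-continental round-trip time of roughly 60 milliseconds (a figure not stated in the announcement) and estimates, for the 100 Gbps, 400 Gbps and 1.2 Tbps rates mentioned above, the bandwidth-delay product a transfer protocol must keep in flight and the idealized time to move one petabyte.

    # Back-of-the-envelope estimates for the link rates mentioned in the article.
    # The 60 ms round-trip time is an assumed value for a cross-continental path;
    # it is not stated in the announcement.

    RATES_BPS = {
        "100 Gbps (previous facility links)": 100e9,
        "400 Gbps (planned facility links)": 400e9,
        "1.2 Tbps (TeraCore ring)": 1.2e12,
    }

    ASSUMED_RTT_S = 0.060          # ~60 ms coast-to-coast round trip (assumption)
    PETABYTE_BITS = 1e15 * 8       # one petabyte expressed in bits

    for label, rate in RATES_BPS.items():
        # Bandwidth-delay product: the data "in flight" a protocol must buffer to fill the pipe.
        bdp_gigabytes = rate * ASSUMED_RTT_S / 8 / 1e9
        # Idealized transfer time for 1 PB, ignoring protocol and encoding overhead.
        pb_hours = PETABYTE_BITS / rate / 3600
        print(f"{label}: BDP ~ {bdp_gigabytes:.1f} GB, 1 PB in ~ {pb_hours:.1f} h")

Under these assumptions, at 1.2 Tbps a transfer would keep roughly 9 GB in flight and move a petabyte in under two hours, versus nearly a full day at 100 Gbps, which illustrates why higher-rate endpoints and protocols are the focus of the experiments described above.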
“I’m excited for the opportunities that the new 1.2 Tbps FABRIC TeraCore ring brings,” said Würthwein. “In the near future, we expect to be able to peer SDSC’s compute and storage capabilities with the TeraCore at 400 Gbps by connecting to FABRIC in Los Angeles. This will allow FABRIC and the Prototype National Research Platform (PNRP) research communities access to unique sets of resources possessed by these platforms, including programmable Network Interface Controllers (NICs) and Field-Programmable Gate Arrays (FPGAs) in both platforms, hundreds of terabytes of Non-Volatile Memory Express (NVMe) drive capacity at PNRP, and many others.”
Ilya Baldin, FABRIC project director, noted that the advancement to 1.2 Tbps brings FABRIC a step closer to making academic research infrastructures more competitive with internet-scale companies. “The TeraCore ring opens the door for expanded academic network infrastructure experimentation, thereby accelerating vitally important innovation and discovery. Additionally, this development sets up FABRIC’s infrastructure for future expansion, allowing the possibility to further upgrade portions of the infrastructure as opportunities become available,” he said.
The TeraCore ring was built using spectrum from the fiber footprint of ESnet6, the cutting-edge, high-speed network operated by the Energy Sciences Network (ESnet) that connects tens of thousands of scientific researchers at Department of Energy laboratories, user facilities and scientific instruments, as well as research and education facilities worldwide. “The scientific research community needs to be able to share, analyze and store data as fast and efficiently as possible to solve today’s scientific challenges. Advancements such as FABRIC’s TeraCore ring are a major step in this direction that we’re proud to have helped facilitate,” said ESnet Executive Director Inder Monga.
The FABRIC infrastructure includes the development sites at the Renaissance Computing Institute/UNC-Chapel Hill, University of Kentucky, and Lawrence Berkeley National Laboratory, and the production sites at Clemson University, University of California San Diego, Florida International University, University of Maryland/Mid-Atlantic Crossroad, University of Utah, University of Michigan, University of Massachusetts Amherst/Massachusetts Green High Performance Computing Center, Great Plains Network, National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign, and Texas Advanced Computing Center. FABRIC TeraCore uses optical equipment from Ciena and Infinera and networking equipment from Cisco.
FABRIC is supported in part by a Mid-Scale RI-1 NSF award (grant no. 1935966), and the core team consists of researchers from the Renaissance Computing Institute (RENCI) at UNC-Chapel Hill, University of Illinois Urbana-Champaign (UIUC), University of Kentucky (UK), Clemson University, Energy Sciences Network (ESnet) at Lawrence Berkeley National Laboratory (Berkeley Lab) and Virnao, LLC.
If you are interested, contact the team at info@fabric-testbed.net to start a conversation about connecting your facility to the FABRIC infrastructure.